Trust and Transparency in Recommender Systems
Siepmann, Clara, Chatti, Mohamed Amine
Trust has long been recognized as an important factor in Recommender Systems (RS). However, there are different perspectives on trust and different ways to evaluate it. Moreover, a link between trust and transparency is often assumed but not always investigated further. In this paper, we first review different understandings and measurements of trust in the AI and RS communities, such as demonstrated and perceived trust. We then examine the relationships between trust, transparency, and mental models, and investigate different strategies to achieve transparency in RS, such as explanation, exploration, and exploranation (i.e., a combination of exploration and explanation). We identify a need for further studies to explore these concepts as well as the relationships between them.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > Germany (0.04)
- Europe > Italy > Apulia > Bari (0.04)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (0.94)
- Information Technology > Human Computer Interaction > Interfaces (0.70)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.65)
Is your company failing at chatbot AI trust and transparency?
Chatbot adoption is projected to increase more than 100% over the next two years.[1] For marketing or operations leaders, chances are good your organization has one or more chatbots deployed, with more in the works. But those chatbots may expose you to unforeseen risks, especially in the areas of trust and transparency. Why does this matter now? AI-infused technologies like chatbots are increasingly in the public eye and under scrutiny.[2]
- Law (0.81)
- Information Technology (0.56)
IBM Brings Artificial Intelligence At Scale To The Marketing And Media Industry
IBM (NYSE: IBM) today announced three new products to add to its growing suite of AI solutions for brand and publishers. The new capabilities are privacy-forward and designed to allow brands to reach consumers while considering user privacy. IBM intends to work with industry leaders, including Xandr/AT&T, Magnite, Nielsen, MediaMath, LiveRamp and Beeswax to help scale the use of AI across the industry. The announcement was made this morning at Advertising Week's digital-first virtual event #AW2020. "While the advertising industry strives to re-emerge strong from the global economic and societal issues we faced this year, it's also deep in the throes of a major transformation with changes to mobile identity, certain elimination of third-party cookies, compliance and regulatory shifts and increased demand for trust and transparency," said Bob Lord, SVP, Cognitive Applications and Blockchain, IBM.
- Information Technology (1.00)
- Media > News (0.40)
The secret to winning the AI race
When it comes to AI based innovation, companies the world over are vying for competitive advantage. However, when it comes to gaining the upper hand, many have written off European companies solely on the basis of stricter privacy laws. But is this really the case? Or could a privacy focused attitude be the secret to winning the AI race? Michael Ingrassia, president and general counsel at Truata tells us more.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
Going for trust and transparency in AI
At a time when artificial intelligence (AI) is moving from the realm of fantasy to practical application, NuEnergy.ai Healthcare Lead Orchid Jahanshahi says the issues of trust and transparency are becoming critical. In a recent interview with ITWC President Fawn Annan conducted as part of the CMO Talks podcast series, Jahanshahi noted many banks are now using AI to decide the creditworthiness of customers and ultimately decide to whom they'll offer loans. That may be a huge advance on some fronts, but she warned that unless the AI algorithms are designed within certain parameters, the system could be biased and might unfairly exclude some customers. "Machine learning can eventually surpass our human ability to catch up. But some oversight is needed to make sure no consequences in terms of bias or lack of transparency occur." Jahanshahi extended the discussion around trust and transparency into the pharmaceuticals industry. "In pharma, it's not enough to discuss features and benefits of a drug.
Precision Regulation for Artificial Intelligence
Among companies building and deploying artificial intelligence, and the consumers making use of this technology, trust is of paramount importance. Companies want the comfort of knowing how their AI systems are making determinations and that they are in compliance with any relevant regulations, while consumers want to know when the technology is being used and how (or whether) it will impact their lives. Source: Morning Consult study conducted on behalf of the IBM Policy Lab, January 2020. As outlined in our Principles for Trust and Transparency, IBM has long argued that AI systems need to be transparent and explainable. That's one reason why we supported the OECD AI Principles, and in particular the need to "commit to transparency and responsible disclosure" in the use of AI systems. Principles are admirable and can help communicate a company's commitments to citizens and consumers.
- Asia > Middle East > Jordan (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- Information Technology > Security & Privacy (0.71)
- Law > Statutes (0.65)
Trust and transparency for your machine learning models with Watson OpenScale
This tutorial is part of the Getting started with Watson OpenScale learning path. In this tutorial, you'll see how IBM Watson OpenScale can be used to monitor your artificial intelligence (AI) models for fairness and accuracy. You'll get a hands-on look at how Watson OpenScale automatically generates a debiased model endpoint to mitigate your fairness issues and provides an explainability view to help you understand how your model makes its predictions. In addition, you'll see how Watson OpenScale uses drift detection. Drift detection will tell you when runtime data is inconsistent with your training data, or when there is an increase in data that is likely to lead to lower accuracy.
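The drift detection described here boils down to comparing the distribution of incoming runtime data against the training data and flagging a shift. Below is a minimal sketch of that idea using a generic population-stability-index-style score; this is an illustration of the underlying technique, not Watson OpenScale's actual implementation, and all names are hypothetical.

```python
import math
import random


def _histogram(values, lo, width, bins):
    """Count values into equal-width bins over [lo, lo + bins * width]."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp max into last bin
        counts[idx] += 1
    return counts


def drift_score(train, runtime, bins=10):
    """PSI-style drift score: near 0 for similar distributions, larger as they diverge."""
    lo = min(min(train), min(runtime))
    hi = max(max(train), max(runtime))
    width = (hi - lo) / bins or 1.0
    p = _histogram(train, lo, width, bins)
    q = _histogram(runtime, lo, width, bins)
    score = 0.0
    for cp, cq in zip(p, q):
        fp = max(cp / len(train), 1e-6)    # smooth empty bins to avoid log(0)
        fq = max(cq / len(runtime), 1e-6)
        score += (fq - fp) * math.log(fq / fp)
    return score


random.seed(0)
train = [random.gauss(0, 1) for _ in range(1000)]
same = [random.gauss(0, 1) for _ in range(1000)]      # runtime data like training
shifted = [random.gauss(1.5, 1) for _ in range(1000)]  # runtime data has drifted

print(drift_score(train, same) < drift_score(train, shifted))
```

A monitoring service would compute such a score per feature on a schedule and raise an alert when it crosses a threshold, which is the kind of signal the tutorial's drift monitor surfaces.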
Watson OpenScale: Promoting trust and transparency when climbing the AI ladder
Climbing the AI ladder: How does that affect my business? Businesses love the idea of putting data to work. Building and scaling AI with trust and transparency -- sounds great, right? As enterprises adopt machine learning to streamline customer service and remedial tasks, their employees can provide better customer experience while freeing themselves up to work on more interesting problems. IBM leads the industry in empowering enterprises to accelerate the journey to AI.
IBM research lead: Artificial intelligence must be clear, trustworthy to succeed
IBM has seen the greatest adoption of artificial intelligence systems across industries in customer service applications. Clear, transparent and explainable outcomes are needed for AI systems to succeed widely. The financial services, retail, manufacturing, energy and utilities sectors are leading the business adoption of AI. Ruchir Puri, chief scientist at IBM Research, recently spoke to S&P Global Market Intelligence about developing trusted AI systems and how IBM is educating companies about ethical uses of the technology. Until February, Puri served as CTO and chief architect of supercomputer IBM Watson.
Trust and transparency of AI for the enterprise
Ruchir Puri is an IBM Fellow and the Chief Architect of IBM Watson. Dr. Puri led the Deep Learning and Machine Learning Platform initiative at IBM Research, led IBM's efforts in software-hardware acceleration for cognitive and analytic workloads, and drove strategy for differentiated cognitive computing infrastructure. He is a Fellow of the IEEE, an ACM Distinguished Speaker, and an IEEE Distinguished Lecturer, and was named the 2014 Asian American Engineer of the Year. Ruchir has been a visiting faculty member at the Dept. of Computer Science, Stanford Univ...